Some Correct Error-driven Versions of the Constraint Demotion Algorithm
Abstract
This paper shows that Error-Driven Constraint Demotion (EDCD), an error-driven learning algorithm proposed by Tesar (1995) for Prince and Smolensky’s (1993) version of Optimality Theory, can fail to converge to a totally ranked hierarchy of constraints, unlike the earlier non-error-driven learning algorithms proposed by Tesar and Smolensky (1993). The cause of the problem is found in Tesar’s use of “mark-pooling ties”, indicating that EDCD can be repaired by assuming Anttila’s (1997) “permuting ties” instead. Simulations show that totally ranked hierarchies can indeed be found by both this repaired version of EDCD and Boersma’s (1998) Minimal Gradual Learning Algorithm.
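To make the demotion step concrete, here is a minimal sketch of a single EDCD update in Python. It assumes a stratified hierarchy encoded as a dict mapping each constraint to a stratum index (0 = highest), and violation profiles as dicts mapping constraints to violation counts; the function name and data representation are illustrative, not from the paper.

```python
def edcd_update(ranking, winner_viols, loser_viols):
    """One Error-Driven Constraint Demotion step (illustrative sketch).

    ranking: dict constraint -> stratum index (0 = highest stratum).
    winner_viols: violations of the observed correct form (the "winner").
    loser_viols: violations of the learner's incorrect optimum (the "loser").
    """
    # Mark cancellation: only the difference in violations matters.
    winner_marks = [c for c in ranking
                    if winner_viols.get(c, 0) > loser_viols.get(c, 0)]
    loser_marks = [c for c in ranking
                   if loser_viols.get(c, 0) > winner_viols.get(c, 0)]
    if not winner_marks or not loser_marks:
        return ranking  # no informative winner-loser pair; nothing to demote
    # Highest-ranked (smallest index) stratum containing a loser mark.
    pivot = min(ranking[c] for c in loser_marks)
    new_ranking = dict(ranking)
    for c in winner_marks:
        if new_ranking[c] <= pivot:        # demote only if not already lower
            new_ranking[c] = pivot + 1     # just below the pivot stratum
    return new_ranking
```

For example, with strata {A: 0, B: 1, C: 2}, a winner violating only B and a loser violating only C, the update demotes B to the stratum just below C. Note that this sketch leaves open exactly the issue the paper addresses: how candidates that tie on pooled marks are compared during production, which is where mark-pooling versus permuting ties diverge.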
Similar resources
Convergence of Error-driven Ranking Algorithms
According to the OT error-driven ranking model of language acquisition, the learner performs a sequence of slight re-rankings triggered by mistakes on the incoming stream of data, until it converges to a ranking that makes no more mistakes. This learning model is very popular in the OT acquisition literature, in particular because it predicts a sequence of rankings that models gradualness in ch...
Remarks and Replies
This article shows that Error-Driven Constraint Demotion (EDCD), an error-driven learning algorithm proposed by Tesar (1995) for Prince and Smolensky’s (1993/2004) version of Optimality Theory, can fail to converge to a correct totally ranked hierarchy of constraints, unlike the earlier non-error-driven learning algorithms proposed by Tesar and Smolensky (1993). The cause of the problem is foun...
Tools for the robust analysis of error-driven ranking algorithms and their implications for modelling the child's acquisition of phonotactics
Error-driven ranking algorithms (EDRAs) perform a sequence of slight re-rankings of the constraint set triggered by mistakes on the incoming stream of data. In general, the sequence of rankings entertained by the algorithm, and in particular the final ranking entertained at convergence, depend not only on the grammar the algorithm is trained on, but also on the specific way data are sampled fro...
Goldilocks Meets the Subset Problem: Evaluating Error Driven Constraint Demotion (RIP/CD) for OT language acquisition
Stages of Phonological Acquisition and Error-Selective Learning
This paper presents an error-driven model of Optimality-Theoretic acquisition, called Error-Selective Learning, which is both restrictive and gradual. It is restrictive in that it chooses grammars that can generate observed outputs but as few others as possible, using a version of Biased Constraint Demotion (BCD: Prince and Tesar, 2004). It is gradual, unlike a pure BCD learner, in that it requi...
Publication date: 2008